200 research outputs found

    Addition of higher order plate elements to NASTRAN

    Two plate elements, the linear strain triangular membrane element CTRIM6 and the higher order plate bending element CTRPLT1, were added to NASTRAN Level 16.0. The theoretical formulation, programming details, and bulk data information pertaining to the addition of these elements are discussed. Sample problems illustrating the use of these elements are presented.

    An optimality criterion for sizing members of heated structures with temperature constraints

    A thermal optimality criterion is presented for sizing members of heated structures with multiple temperature constraints. The optimality criterion is similar to an existing optimality criterion for the design of mechanically loaded structures with displacement constraints. The effectiveness of the thermal optimality criterion is assessed by applying it to one- and two-dimensional thermal problems in which temperatures can be controlled by varying the material distribution in the structure. Results obtained from the optimality criterion agree within 2 percent with results from a closed-form solution and with results from a mathematical programming technique. The thermal optimality criterion augments existing optimality criteria for strength- and stiffness-related constraints and offers the possibility of extending optimality techniques to sizing structures with combined thermal and mechanical loading.
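The flavor of such an optimality criterion can be illustrated on a simplified single-constraint analogue (this is a hypothetical sketch, not the paper's multi-constraint method): minimize weight sum(w_i * A_i) subject to a separable response constraint sum(c_i / A_i) <= t_allow. Stationarity of the Lagrangian gives A_i = sqrt(lambda * c_i / w_i), and the multiplier lambda is fixed by making the constraint active:

```python
import math

def oc_sizing(c, w, t_allow):
    """Minimum-weight sizes A_i for one constraint of the separable form
    sum_i c_i / A_i <= t_allow (illustrative single-constraint analogue
    of an optimality-criterion resizing; c, w, t_allow are hypothetical).

    From d/dA_i [sum w_i A_i - lambda * (t_allow - sum c_i / A_i)] = 0:
        A_i = sqrt(lambda * c_i / w_i),
    then lambda is chosen so the constraint holds with equality."""
    s = sum(math.sqrt(ci * wi) for ci, wi in zip(c, w))
    return [math.sqrt(ci / wi) * s / t_allow for ci, wi in zip(c, w)]

# Example: two members, c = [1, 4], unit weights, allowable response 3.0.
areas = oc_sizing([1.0, 4.0], [1.0, 1.0], 3.0)
response = sum(ci / ai for ci, ai in zip([1.0, 4.0], areas))
```

At the optimum the constraint is active (response equals t_allow exactly), which is the defining property optimality-criterion methods exploit.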

    Molecular Dynamics Simulation of Apolipoprotein E3 Lipid Nanodiscs

    Full text link
    Nanodiscs are binary discoidal complexes of a phospholipid bilayer circumscribed by belt-like helical scaffold proteins. Using coarse-grained and all-atom molecular dynamics simulations, we explore the stability, size, and structure of nanodiscs formed between the N-terminal domain of apolipoprotein E3 (apoE3-NT) and a variable number of 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) molecules. We study both parallel and antiparallel double-belt configurations, consisting of four proteins per nanodisc. Our simulations predict nanodiscs containing between 240 and 420 DMPC molecules to be stable. The antiparallel configurations exhibit an average of 1.6 times more amino acid interactions between protein chains and 2 times more ionic contacts, compared to the parallel configuration. With one exception, DMPC order parameters are consistently larger in the antiparallel configuration than in the parallel one. In most cases, the root mean square deviation of the positions of the protein backbone atoms is smaller in the antiparallel configuration. We further report nanodisc size, thickness, radius of gyration, and solvent accessible surface area. Combining all investigated parameters, we hypothesize that the antiparallel protein configuration leads to more stable and more rigid nanodiscs than the parallel one.
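The lipid order parameters compared above follow the standard definition S = <(3 cos^2(theta) - 1) / 2>, where theta is the angle between a bond vector and the bilayer normal. A minimal helper illustrating that average (an assumption-free textbook formula, not the authors' analysis code) might look like:

```python
import math

def order_parameter(vectors, normal=(0.0, 0.0, 1.0)):
    """Ensemble average S = <(3 cos^2(theta) - 1) / 2> over bond vectors,
    with theta measured against the bilayer normal. S = 1 means perfect
    alignment with the normal; S = -0.5 means perpendicular orientation."""
    nx, ny, nz = normal
    nlen = math.sqrt(nx * nx + ny * ny + nz * nz)
    total = 0.0
    for vx, vy, vz in vectors:
        vlen = math.sqrt(vx * vx + vy * vy + vz * vz)
        cos_t = (vx * nx + vy * ny + vz * nz) / (vlen * nlen)
        total += (3.0 * cos_t * cos_t - 1.0) / 2.0
    return total / len(vectors)
```

Larger S, as reported for the antiparallel belts, indicates acyl chains more tightly aligned with the membrane normal, i.e. a more ordered bilayer.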

    Geometric computing and uniform grid technique

    If computational geometry is to play an important role in professional environments (e.g. graphics and robotics), the data structures it advocates must be readily implementable and the algorithms efficient. This paper reviews the uniform grid and a diverse set of geometric algorithms based on it. The technique, invented by the second author, is a flat, and thus non-hierarchical, grid whose resolution adapts to the data. It is especially suitable for efficiently determining which pairs among a large number of short edges intersect. Several of the algorithms presented here exist as working programs (among them a visible surface program for polyhedra) and can handle large data sets (i.e. many thousands of geometric objects). Furthermore, the uniform grid is well suited to parallel processing; the parallel implementation presented gives very good speed-up results.
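The edge-intersection use case can be sketched in a few lines: bucket each edge into every grid cell its bounding box overlaps, then test intersection only between edges sharing a bucket. This is an illustrative reconstruction of the general idea; the `cell` size parameter is a stand-in for the data-adaptive resolution described in the paper.

```python
from collections import defaultdict

def uniform_grid_intersections(edges, cell):
    """Find properly intersecting pairs among 2D segments using a flat
    uniform grid. `edges` is a list of ((x1, y1), (x2, y2)) tuples;
    `cell` is the grid resolution (hypothetical fixed parameter)."""
    grid = defaultdict(list)
    for i, ((x1, y1), (x2, y2)) in enumerate(edges):
        # Register the edge in every cell overlapped by its bounding box.
        for cx in range(int(min(x1, x2) // cell), int(max(x1, x2) // cell) + 1):
            for cy in range(int(min(y1, y2) // cell), int(max(y1, y2) // cell) + 1):
                grid[(cx, cy)].append(i)

    def segs_intersect(a, b):
        # Standard orientation test for proper segment intersection.
        (p1, p2), (p3, p4) = a, b
        def ccw(p, q, r):
            return (r[1] - p[1]) * (q[0] - p[0]) > (q[1] - p[1]) * (r[0] - p[0])
        return (ccw(p1, p3, p4) != ccw(p2, p3, p4)
                and ccw(p1, p2, p3) != ccw(p1, p2, p4))

    found = set()
    for bucket in grid.values():
        for a in range(len(bucket)):
            for b in range(a + 1, len(bucket)):
                i, j = bucket[a], bucket[b]
                if (i, j) not in found and segs_intersect(edges[i], edges[j]):
                    found.add((i, j))
    return found
```

For short edges, each bucket holds only a handful of candidates, so the quadratic pair test runs over tiny sets, which is the source of the technique's efficiency; the flat (non-hierarchical) layout also makes the bucketing step trivially parallelizable.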

    Easy on that trigger dad: a study of long term family photo retrieval

    We examine the effects of new technologies for digital photography on people's longer term storage of and access to collections of personal photos. We report an empirical study of parents' ability to retrieve photos related to salient family events from more than a year earlier. Performance was relatively poor, with people failing to find almost 40% of pictures. We analyze participants' organizational and access strategies to identify reasons for this poor performance. Possible reasons for retrieval failure include: storing too many pictures, rudimentary organization, use of multiple storage systems, failure to maintain collections, and participants' false beliefs about their ability to access photos. We conclude by exploring the technical and theoretical implications of these findings.

    Mid-circuit qubit measurement and rearrangement in a ¹⁷¹Yb atomic array

    Full text link
    Measurement-based quantum error correction relies on the ability to determine the state of a subset of qubits (ancillae) within a processor without revealing or disturbing the state of the remaining qubits. Among neutral-atom based platforms, a scalable, high-fidelity approach to mid-circuit measurement that retains the ancilla qubits in a state suitable for future operations has not yet been demonstrated. In this work, we perform imaging using a narrow-linewidth transition in an array of tweezer-confined ¹⁷¹Yb atoms to demonstrate nondestructive state-selective and site-selective detection. By applying site-specific light shifts, selected atoms within the array can be hidden from imaging light, which allows a subset of qubits to be measured while causing only percent-level errors on the remaining qubits. As a proof-of-principle demonstration of conditional operations based on the results of the mid-circuit measurements, and of our ability to reuse ancilla qubits, we perform conditional refilling of ancilla sites to correct for occasional atom loss, while maintaining the coherence of data qubits. Looking towards true continuous operation, we demonstrate loading of a magneto-optical trap with a minimal degree of qubit decoherence.
    Comment: 9 pages, 6 figures

    Isolation, Characterization, and Stability of Discretely-Sized Nanolipoprotein Particles Assembled with Apolipophorin-III

    Background: Nanolipoprotein particles (NLPs) are discoidal, nanometer-sized particles comprised of self-assembled phospholipid membranes and apolipoproteins. NLPs assembled with human apolipoproteins have been used for myriad biotechnology applications, including membrane protein solubilization, drug delivery, and diagnostic imaging. To expand the repertoire of lipoproteins for these applications, insect apolipophorin-III (apoLp-III) was evaluated for the ability to form discretely-sized, homogeneous, and stable NLPs. Methodology: Four NLP populations distinct with regard to particle diameters (ranging in size from 10 nm to >25 nm) and lipid-to-apoLp-III ratios were readily isolated to high purity by size exclusion chromatography. Remodeling of the purified NLP species over time at 4 °C was monitored by native gel electrophoresis, size exclusion chromatography, and atomic force microscopy. Purified 20 nm NLPs displayed no remodeling and remained stable for over 1 year. Purified NLPs with 10 nm and 15 nm diameters ultimately remodeled into 20 nm NLPs over a period of months. Intra-particle chemical cross-linking of apoLp-III stabilized NLPs of all sizes. Conclusions: ApoLp-III-based NLPs can be readily prepared, purified, characterized, and stabilized, suggesting their utility.

    In-Datacenter Performance Analysis of a Tensor Processing Unit

    Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X - 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X - 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
    Comment: 17 pages, 11 figures, 8 tables. To appear at the 44th International Symposium on Computer Architecture (ISCA), Toronto, Canada, June 24-28, 2017
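The quoted 92 TOPS peak follows directly from the MAC count: the matrix unit is a 256 x 256 systolic array of 8-bit MACs, each contributing a multiply and an add per cycle at the paper's 700 MHz clock. A back-of-envelope check:

```python
# Back-of-envelope check of the TPU's quoted peak throughput.
macs = 256 * 256          # 65,536 8-bit MAC units in the matrix unit
clock_hz = 700e6          # TPU clock rate reported in the paper
ops_per_mac = 2           # one multiply + one accumulate per cycle
peak_tops = macs * clock_hz * ops_per_mac / 1e12
print(f"{peak_tops:.1f} TOPS")  # ~91.8, quoted as 92 TOPS
```

The same arithmetic highlights why utilization, not peak rate, dominates the comparisons in the paper: the fixed array delivers its peak only when the matrix unit can be kept fully fed.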